
Pre Production App Monitoring as a Quality Assurance Tool


Friday, April 24, 2015

Jon Spencer

With the increasing complexity of mobile/web applications and their service-oriented architectures, and with user expectations elevated to levels that were unimaginable only a few years ago, software development teams are more challenged than ever before. Budgets and release cycles are tight, and everyone from finance managers to marketing and sales staff wants high-quality, market-ready products delivered faster and at a lower cost.

Unfortunately, meeting these goals can make it more difficult to integrate quality into the development process. The end result is that far too many problems are identified and resolved in production. While some issues, such as integrations with other composite applications, can only be resolved in production, others can be addressed much sooner.

One often overlooked mechanism to help with the quality effort is development-level application monitoring, along with analysis and reporting. (Note that for the remainder of this article, we will use “monitor” and “monitoring” as terms to encompass these activities and all related post-monitoring processes.)

Where Does Monitoring Fit In?

For software teams that build monitoring into a robust development process, performance monitoring and analysis can establish more meaningful baselines and KPIs. They can monitor for achievement of, or deviation from, these metrics and use the insights they glean to improve software quality, either pre-release or for the next iteration. When used effectively, monitors can play a vital role in helping QA teams identify performance bottlenecks, transaction miscues and other predictive indicators of application disappointment or failure.
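To illustrate the baseline-and-deviation idea, here is a minimal Python sketch. The sample values, the three-sigma threshold and the `flag_deviations` helper are all hypothetical; in practice the baseline would come from a monitoring tool's historical data.

```python
import statistics

# Hypothetical response-time samples (seconds) from earlier, known-good runs.
baseline_samples = [0.82, 0.79, 0.85, 0.81, 0.88, 0.80, 0.84]

# Samples from the current pre-production run.
current_samples = [0.83, 0.90, 1.42, 0.86, 1.51]

def flag_deviations(baseline, current, sigmas=3.0):
    """Flag samples that exceed the baseline mean by more than
    `sigmas` standard deviations -- a simple KPI guardrail."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    threshold = mean + sigmas * stdev
    return [s for s in current if s > threshold]

outliers = flag_deviations(baseline_samples, current_samples)
if outliers:
    print(f"KPI deviation: {len(outliers)} sample(s) above threshold: {outliers}")
```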

Consider HP LoadRunner, for example, a load-testing tool that offers a variety of monitors, including resource monitors, transaction monitors, run-time monitors and network monitors. HP LoadRunner's transaction monitor provides metrics such as:

- Transaction Response Time

- Transactions per Second (Passed)

- Transactions per Second (Failed, Stopped)

- Total Transactions per Second (Passed)
 
These end-of-the-line metrics provide vital insight into transaction errors and enable QA teams to explore and resolve problems across the transaction lifecycle.
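As a rough illustration of how such metrics are derived, the Python sketch below computes passed and failed transactions per second plus an average response time from a list of transaction records. The record format is invented for the example; it is not LoadRunner's actual result schema.

```python
from collections import Counter

# Hypothetical transaction records: (name, status, response_time_s, start_offset_s).
records = [
    ("login",    "Passed", 0.42, 0.0),
    ("checkout", "Passed", 1.10, 0.5),
    ("checkout", "Failed", 4.80, 1.2),
    ("login",    "Passed", 0.39, 1.9),
]

offsets = [r[3] for r in records]
duration = (max(offsets) - min(offsets)) or 1.0  # avoid division by zero

status_counts = Counter(status for _, status, _, _ in records)
passed_tps = status_counts["Passed"] / duration           # Transactions per Second (Passed)
failed_tps = status_counts["Failed"] / duration           # Transactions per Second (Failed)
avg_response = sum(r[2] for r in records) / len(records)  # mean Transaction Response Time

print(f"Passed TPS: {passed_tps:.2f}  Failed TPS: {failed_tps:.2f}  "
      f"Avg response: {avg_response:.2f}s")
```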

Another example is HP's Diagnostics software, a tool that monitors application transaction health across the entire application lifecycle, in traditional, virtualized and cloud environments. It can help triangulate problems and uncover critical issues that would otherwise be missed, allowing quicker isolation of bottlenecks and other issues during root-cause analysis; that information can then be shared with DevOps for analysis and quick resolution.

More Monitoring, Please

In addition to third-party platform monitors, most computing architectures incorporate basic monitoring tools that are surprisingly valuable in development and QA. Computer technicians have used these free, built-in, client- and server-side OS tools for years to resolve production-side application problems, and they can be put to equally good use pre-production. Securing valuable data will require more effort and ingenuity on the part of the development and QA teams, and the reports may not be as impressive as those of a third-party monitoring tool, but the insights are just as valid.

Here are a few examples (a scripted-usage sketch follows the list):

- Netstat (netstat.exe) can be used to uncover a variety of network-related issues, from connection problems to session-state errors.

- Perfmon (perfmon.exe), among its many functions, is a great tool for identifying application memory leaks.

- Tracert (tracert.exe) can determine the number of hops an application's traffic takes in transit over a network.
 
- Arp (arp.exe) displays and modifies entries in the Address Resolution Protocol (ARP) cache, which contains one or more tables used to store IP addresses and their resolved Ethernet or Token Ring physical addresses.

- NSLookup (nslookup.exe) displays information that you can use to diagnose Domain Name System (DNS) infrastructure.
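As a sketch of how a QA team might script these tools and keep their output as test artifacts, the following Python snippet shells out to a few of them and saves each report. The commands assume a Windows host, and example.com is a placeholder target; on Linux or macOS you would substitute equivalents such as traceroute.

```python
import subprocess

# Run a few built-in OS tools and capture their output as QA artifacts.
commands = {
    "netstat":  ["netstat", "-an"],                # connection and session state
    "tracert":  ["tracert", "-d", "example.com"],  # hop count to a test host
    "nslookup": ["nslookup", "example.com"],       # DNS resolution check
}

for name, cmd in commands.items():
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
        with open(f"{name}_report.txt", "w") as fh:
            fh.write(result.stdout)
        print(f"{name}: captured {len(result.stdout.splitlines())} lines")
    except (OSError, subprocess.TimeoutExpired) as exc:
        print(f"{name}: could not run ({exc})")
```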

In many cases, these tools can be used in tandem with third-party monitors to provide greater insight into one of the hottest trends in application performance management (APM) today: end-user-experience (EUE) monitoring.

Monitoring in APM tools serves a variety of functions, one of which is isolation of inconsistencies and latencies that occur as real users interact with applications and their services. For example, APM vendor AppDynamics offers a monitoring tool that can instantly show the complete code execution and timing of slow user requests or business transactions for any Java or .NET application.
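Agent-based APM instruments code automatically, but the underlying idea can be sketched by hand. The snippet below is a generic illustration, not AppDynamics' API: a context manager times a named business transaction and flags it when it exceeds a threshold.

```python
import time
from contextlib import contextmanager

SLOW_THRESHOLD_S = 1.0  # flag business transactions slower than this

@contextmanager
def business_transaction(name):
    """Time a named transaction and report it when it exceeds the
    threshold -- a hand-rolled stand-in for what agent-based APM
    tools record automatically."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD_S:
            print(f"SLOW transaction '{name}': {elapsed:.2f}s")

# Usage: wrap the code path behind a user-facing request.
with business_transaction("checkout"):
    time.sleep(1.2)  # stand-in for the real checkout logic
```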

Fully automated (e.g. agent-based) monitors such as these are not yet common for the development and QA cycles, although we expect them to become so in the future. However, with a little ingenuity, there is no reason teams cannot use monitoring results, often paired with user feedback, to isolate and resolve user experience problems, transaction disruptions and other detriments to application performance. 

Where’s the User?

Depending on the types of tests you run, you may be saying, "We do not have access to real users." Although advanced performance monitoring with live users provides some of the most compelling examples of how monitoring can be used in QA, application testing that involves user simulation or virtualization can offer a similar benefit.

For example, a tester who asks a team member at a distributed location to run an application on the devices at that location is working, in theory, with a real user. If the tester uses monitoring tools to capture the activity that occurs during these sessions, rather than simply recording the other team member's perceptions, he has far more data to work with.
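A minimal sketch of that idea: rather than relying on the remote tester's recollection, capture timestamped session events in a structured file that QA can analyze later. The `SessionRecorder` class, the event kinds and the device label are all hypothetical.

```python
import json
import time

class SessionRecorder:
    """Minimal event recorder for a remote test session: captures
    timestamped events (screens, errors, taps) so QA has structured
    data instead of the tester's recollection alone."""

    def __init__(self, tester, device):
        self.meta = {"tester": tester, "device": device}
        self.events = []

    def record(self, kind, detail):
        self.events.append({"t": time.time(), "kind": kind, "detail": detail})

    def save(self, path):
        with open(path, "w") as fh:
            json.dump({"meta": self.meta, "events": self.events}, fh, indent=2)

# Hypothetical usage during a distributed test session.
rec = SessionRecorder(tester="remote-colleague", device="test-device-01")
rec.record("screen", "login")
rec.record("error", "timeout loading product list")
rec.save("session_report.json")
```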

To make user monitoring even more valuable to QA efforts, especially for validating multiple sessions and pinpointing discrepancies, organizations can work with real-world users. Many cloud-based testing solutions give organizations access to a wide array of networks and geographies where firms can set up their own real-world user base with employees, business partners, vendors and even pre-screened customers.

Alternatively, enterprises are adopting crowdsourced testing as a tool for capturing large datasets about user experiences. This doesn't mean they are returning to the uncontrolled findings that often result from traditional alpha and beta testing. Rather, these enterprises work with crowdsourced testing vendors who develop purpose-designed, customized pools of testers that have been carefully vetted and meet specific criteria.

With crowdsourced testing, the users themselves provide feedback on their experiences with, issues with, and objections to an application's features, functions, level of intuitiveness and other key success factors. It is already possible for firms or their vendors to deploy monitoring tools in conjunction with these tests to capture, display and analyze user feedback. At Orasi, we envision a time when such an approach will be commonplace.

The Final Outcome

Best-practice development and testing dictates that applications should undergo performance, functional and load tests that replicate, as closely as possible, the real-world scenarios the applications will encounter. New approaches, from cloud-based service virtualization to crowdsourced user testing, are enabling companies to achieve this goal, improving the quality of their applications while reducing time to market and capturing increased ROI.
 
Nevertheless, I submit that those who incorporate available monitoring and analysis tools with these tests can more easily attain their quality objectives. Now that you know the potential of application performance monitoring in QA, we hope you will promote this intriguing concept with management and start exploring how you and your team can integrate monitors into your QA efforts.



Read more: http://www.orasi.com/Pages/default.aspx
This content is made possible by a guest author, or sponsor; it is not written by and does not necessarily reflect the views of App Developer Magazine's editorial staff.
